
    A deterministic approach to active debris removal target selection

    Many decisions with widespread economic, political, and legal consequences are being considered based on concerns about the sustainability of spaceflight and on space debris simulations showing that Active Debris Removal (ADR) may be necessary. The debris environment predictions are affected by many sources of error, including low-accuracy ephemerides and propagators. This, together with the inherent unpredictability of e.g. solar activity or debris attitude, raises doubts about the ADR target lists that are produced. Target selection is considered highly important, as removal of non-relevant objects will unnecessarily increase the overall mission cost [1]. One of the primary factors that should be used in ADR target selection is the accumulated collision probability of every object [2]. To this end, a conjunction detection algorithm, based on the "smart sieve" method, has been developed and applied to an example snapshot of the public two-line element catalogue. Another algorithm was then applied to the identified conjunctions to estimate the maximum and true probabilities of collisions taking place. Two target lists were produced by ranking the objects according to the probability that they will take part in any collision over the simulated time window. These probabilities were computed using the maximum probability approach, which is time-invariant, and using estimates of the true collision probability computed with covariance information. The top-priority targets are compared, and the impacts of data accuracy and its decay are highlighted. General conclusions regarding the importance of Space Surveillance and Tracking for the purpose of ADR are drawn, and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
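
    As an illustration of the ranking step described above, the sketch below accumulates a per-object maximum collision probability from a list of its conjunctions. It assumes the standard closed form P_max = R^2 / (e d^2), obtained by maximising an isotropic encounter-plane Gaussian over its variance; the radii, miss distances, and object labels are illustrative, and the paper's actual implementation may differ.

```python
import math

def max_collision_probability(miss_distance_m: float, combined_radius_m: float) -> float:
    """Upper bound on the collision probability of a single conjunction.

    Assumes an isotropic 2-D Gaussian position error in the encounter plane
    and maximises the probability over the (unknown) covariance size, which
    gives P_max = R^2 / (e * d^2) for R << d.
    """
    if miss_distance_m <= combined_radius_m:
        return 1.0  # objects overlap at closest approach
    ratio = combined_radius_m / miss_distance_m
    return min(1.0, ratio * ratio / math.e)

def accumulated_probability(event_probabilities):
    """Probability that an object takes part in at least one collision,
    assuming the individual conjunctions are independent."""
    p_none = 1.0
    for p in event_probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Rank two hypothetical objects by accumulated maximum collision probability.
obj_a = accumulated_probability([max_collision_probability(500.0, 5.0),
                                 max_collision_probability(1200.0, 5.0)])
obj_b = accumulated_probability([max_collision_probability(300.0, 3.0)])
print(sorted({"A": obj_a, "B": obj_b}.items(), key=lambda kv: -kv[1]))
```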

    Constrained optimisation of preliminary spacecraft configurations under the design-for-demise paradigm

    In the past few years, interest in the implementation of design-for-demise measures has increased steadily. Most mid-sized satellites currently launched and already in orbit fail to comply with the casualty risk threshold of 0.0001. Therefore, satellite manufacturers and mission operators need to dispose of them through a controlled re-entry, which has a higher cost and increased complexity. Through the design-for-demise paradigm, this additional cost and complexity can be removed, as the spacecraft is directly compliant with the casualty risk regulations. However, building a spacecraft such that most of its parts will demise may lead to designs that are more vulnerable to space debris impacts, thus compromising the reliability of the mission. In fact, the requirements connected to demisability and survivability are in general competing. Given this competing nature, trade-off solutions can be found that favour the implementation of design-for-demise measures while still keeping the spacecraft resilient to space debris impacts. A multi-objective optimisation framework has been developed by the authors in previous works. The framework's objective is to find preliminary design solutions that account for the competing nature of the demisability and survivability of a spacecraft from the early stages of the mission design. In this way, a more integrated design can be achieved. The present work focuses on the improvement of the multi-objective optimisation framework by including constraints. The paper shows the application of the constrained optimisation to two relevant examples: the optimisation of a tank assembly and the optimisation of a typical satellite configuration. Comment: Pre-print submitted to the Journal of Space Safety Engineering.
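
    As a minimal sketch of this kind of constrained trade-off, the snippet below keeps only the candidate designs that satisfy the 0.0001 casualty-risk threshold and are Pareto-optimal with respect to the two competing objectives. The Design fields, names, and numerical values are illustrative placeholders, not the framework's actual data model or objective functions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Design:
    name: str
    demisability: float   # to be maximised (e.g. fraction of mass that demises)
    survivability: float  # to be maximised (e.g. probability of no critical damage)
    casualty_risk: float  # constraint: must stay below 1e-4

def feasible(d: Design, risk_threshold: float = 1e-4) -> bool:
    return d.casualty_risk <= risk_threshold

def dominates(a: Design, b: Design) -> bool:
    """a dominates b if it is no worse in both objectives and better in one."""
    return (a.demisability >= b.demisability and a.survivability >= b.survivability
            and (a.demisability > b.demisability or a.survivability > b.survivability))

def pareto_front(designs: List[Design]) -> List[Design]:
    candidates = [d for d in designs if feasible(d)]
    return [d for d in candidates
            if not any(dominates(other, d) for other in candidates if other is not d)]

designs = [
    Design("aluminium tank", 0.9, 0.6, 5e-5),
    Design("titanium tank",  0.4, 0.9, 2e-4),   # infeasible: risk above threshold
    Design("hybrid layout",  0.7, 0.8, 8e-5),
]
print([d.name for d in pareto_front(designs)])
```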

    A new analysis of debris mitigation and removal using networks

    Modelling studies have shown that the implementation of mitigation guidelines, which aim to reduce the amount of new debris generated on-orbit, is an important requirement of future space activities but may be insufficient to stabilise the near-Earth debris environment. The role of a variety of mitigation practices in stabilising the environment has been investigated over the last decade, as has the potential of active debris removal (ADR) methods in recent work. We present a theoretical approach to the analysis of the debris environment that is based on the study of networks, composed of vertices and edges, which describe the dynamic relationships between Earth satellites in the debris system. Future projections of the 10 cm and larger satellite population in a non-mitigation scenario, conducted with the DAMAGE model, are used to illustrate key aspects of this approach. Information from the DAMAGE projections is used to reconstruct a network in which vertices represent satellites and edges encapsulate conjunctions between collision pairs. The network structure is then quantified using statistical measures, providing a numerical baseline for this future projection scenario. Finally, the impact of mitigation strategies and active debris removal, which can be mapped onto the network by altering or removing edges and vertices, can be assessed in terms of the changes from this baseline. The paper introduces the network methodology and highlights the ways in which this approach can be used to formalise criteria for debris mitigation and removal. It then summarises changes to the adopted network that correspond to an increasing stability, and changes that represent a decreasing stability, of the future debris environment.
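
    A minimal sketch of the network construction and a few baseline measures, assuming a networkx graph whose vertices are satellites and whose edges are conjunctions; the satellite IDs and conjunction pairs below are illustrative and do not come from a DAMAGE projection.

```python
import networkx as nx

# Conjunction pairs (satellite IDs) as might be extracted from a debris
# environment projection; the IDs and pairs are illustrative only.
conjunctions = [(25544, 43205), (25544, 27386), (25544, 39084),
                (27386, 39084), (41866, 20580)]

G = nx.Graph()
G.add_edges_from(conjunctions)

# Simple statistical measures that characterise the network baseline.
degrees = dict(G.degree())
print("mean degree:", sum(degrees.values()) / G.number_of_nodes())
print("average clustering:", nx.average_clustering(G))
print("connected components:", nx.number_connected_components(G))

# Active debris removal maps to vertex removal; re-measure afterwards.
most_connected = max(degrees, key=degrees.get)
G.remove_node(most_connected)
print("components after removing", most_connected, ":",
      nx.number_connected_components(G))
```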

    Population vulnerability models for asteroid impact risk assessment

    An asteroid impact is a low-probability event with potentially devastating consequences. The Asteroid Risk Mitigation Optimization and Research (ARMOR) software tool determines whether a colliding asteroid experiences an airburst or a surface impact and calculates the severity and reach of each effect on the global map. To calculate the consequences of an impact in terms of loss of human life, new vulnerability models are derived that connect the severity of seven impact effects (strong winds, overpressure shockwave, thermal radiation, seismic shaking, ejecta deposition, cratering, and tsunamis) with lethality to human populations. With the new vulnerability models, ARMOR estimates the casualties of an impact, taking into account the local population and geography. The presented algorithms and models are employed in two case studies to estimate total casualties as well as the damage contribution of each impact effect. The case studies highlight that aerothermal effects are most harmful, except for deep-water impacts, where tsunamis are the dominant hazard. Continental shelves serve a protective function against the tsunami hazard caused by impactors on the shelf. Furthermore, the calculation of impact consequences facilitates asteroid risk estimation to better characterize a given threat, and the concept of risk, as well as its applicability to the asteroid impact scenario, is presented.
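
    The sketch below illustrates one plausible way casualties can be aggregated over a population grid: per-effect lethalities are combined assuming independence and multiplied by the cell population. The effect names, lethality values, and populations are illustrative and are not ARMOR's actual vulnerability models.

```python
def combined_lethality(effect_lethalities):
    """Probability that a person is a casualty from at least one effect,
    treating the individual effect lethalities as independent."""
    survive = 1.0
    for p in effect_lethalities:
        survive *= (1.0 - p)
    return 1.0 - survive

def expected_casualties(grid_cells):
    """grid_cells: iterable of (population, {effect: lethality}) tuples."""
    total = 0.0
    for population, effects in grid_cells:
        total += population * combined_lethality(effects.values())
    return total

# Illustrative coastal cells affected mainly by blast overpressure and tsunami.
cells = [
    (120_000, {"overpressure": 0.30, "thermal": 0.10, "tsunami": 0.00}),
    (45_000,  {"overpressure": 0.05, "thermal": 0.02, "tsunami": 0.60}),
]
print(f"expected casualties: {expected_casualties(cells):,.0f}")
```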

    Processing two line element sets to facilitate re-entry prediction of spent rocket bodies from geostationary transfer orbit

    Predicting the re-entry of space objects enables the risk they pose to the ground population to be managed. The more accurate the re-entry forecast, the more cost-efficient the risk mitigation measures that can be put in place. However, at present, the only publicly available ephemerides (two-line element sets, TLEs) should not be used directly for accurate re-entry prediction. They may contain erroneous state vectors, which need to be filtered out. Also, the object's physical parameters (ballistic and solar radiation pressure coefficients) need to be estimated to enable accurate propagation. These estimates are only valid between events that change the object's physical properties, e.g. collisions and fragmentations. Thus, these events need to be identified amongst the TLEs. This paper presents a TLE analysis methodology that enables outlying TLEs and space events to be identified. It is then demonstrated how the various TLE filtering stages improve the accuracy of TLE-based re-entry prediction.
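
    As a sketch of an outlier-filtering stage, the snippet below derives a semi-major axis from each TLE's mean motion and flags entries that deviate from the series median by more than a few median absolute deviations. The threshold and the mean-motion series are illustrative; the paper's filtering criteria may differ.

```python
import math

MU_EARTH = 398600.4418e9  # Earth's gravitational parameter, m^3/s^2

def semi_major_axis(mean_motion_rev_per_day: float) -> float:
    """Semi-major axis (m) derived from the TLE mean motion."""
    n = mean_motion_rev_per_day * 2.0 * math.pi / 86400.0  # rad/s
    return (MU_EARTH / n**2) ** (1.0 / 3.0)

def flag_outliers(mean_motions, threshold=5.0):
    """Flag TLEs whose derived semi-major axis deviates from the series
    median by more than `threshold` median absolute deviations."""
    sma = [semi_major_axis(n) for n in mean_motions]
    med = sorted(sma)[len(sma) // 2]
    mad = sorted(abs(a - med) for a in sma)[len(sma) // 2] or 1.0
    return [abs(a - med) / mad > threshold for a in sma]

# Mean motions (rev/day) for a decaying GTO rocket body; the fourth entry
# mimics an erroneous TLE with an implausible jump.
series = [2.51, 2.52, 2.53, 3.90, 2.55, 2.56]
print(flag_outliers(series))
```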

    Enhancing spaceflight safety with UOS3 cubesat

    Earth orbits are becoming increasingly congested. This will not only impact future space operations but also become a concern for the population on the ground; with more spacecraft being flown, more objects will re-enter the atmosphere in an uncontrolled fashion. Parts of these satellites can reach the Earth's surface and endanger the ground population (e.g. the ROSAT or UARS satellites). A student-run project at the University of Southampton aims to build a 1U cubesat (an approximately 10 by 10 by 10 cm satellite) that will gather data to improve the accuracy of re-entry predictions. The cubesat will record and deliver its position and attitude during the orbital decay, thus providing validation data for re-entry prediction tools. This will reduce the risk to the ground population because more accurate prognoses will allow mitigation measures to be implemented in the areas at risk. The mission could also allow the risk of collision between spacecraft to be estimated more accurately thanks to improved atmospheric models. This would give decision makers more complete information to use, for instance, in collision avoidance manoeuvre planning.

    Understanding person acquisition using an interactive activation and competition network

    Face perception is one of the most developed visual skills that humans display, and recent work has attempted to examine the mechanisms involved in face perception by noting how neural networks achieve the same performance. The purpose of the present paper is to extend this approach to look not just at human face recognition, but also at human face acquisition. Experiment 1 presents empirical data describing the acquisition over time of appropriate representations for newly encountered faces. These results are compared with those of Simulation 1, in which a modified IAC (interactive activation and competition) network capable of modelling the acquisition process is generated. Experiment 2 and Simulation 2 explore the mechanisms of learning further, and it is demonstrated that the acquisition of a set of associated new facts is easier than the acquisition of individual facts in isolation from one another. This is explained in terms of the advantage gained from additional inputs and the mutual reinforcement of developing links within an interactive neural network system.
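
    For reference, a minimal sketch of one common IAC update rule (in the spirit of McClelland and Rumelhart's formulation) is given below; the unit layout, weights, and parameters are illustrative and are not the modified network used in the simulations.

```python
def iac_step(activations, weights, ext_input, rest=-0.1, a_min=-0.2, a_max=1.0,
             decay=0.1, rate=0.1):
    """One update cycle of an interactive activation and competition network.

    weights[i][j] is the connection from unit j to unit i (positive within a
    pool of mutually supporting units, negative between competitors).  Only
    positive activations are propagated, as in the classic IAC formulation.
    """
    updated = []
    for i, a in enumerate(activations):
        net = ext_input[i] + sum(w * max(0.0, activations[j])
                                 for j, w in enumerate(weights[i]))
        if net > 0:
            delta = (a_max - a) * net - decay * (a - rest)
        else:
            delta = (a - a_min) * net - decay * (a - rest)
        updated.append(min(a_max, max(a_min, a + rate * delta)))
    return updated

# Three illustrative units: a face unit (driven externally), the associated
# name unit it excites, and a competing name unit under mutual inhibition.
weights = [[0.0, 0.0, 0.0],
           [0.8, 0.0, -0.6],
           [0.0, -0.6, 0.0]]
activations = [-0.1, -0.1, -0.1]   # all units start at rest
external = [0.4, 0.0, 0.0]
for _ in range(50):
    activations = iac_step(activations, weights, external)
print([round(a, 3) for a in activations])
```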

    A temperate former West Antarctic ice sheet suggested by an extensive zone of bed channels

    Several recent studies predict that the West Antarctic Ice Sheet will become increasingly unstable under warmer conditions. Insights into such change can be gained through investigations of the subglacial landscape, which contains imprints of former ice-sheet behavior. Here, we present radio-echo sounding data and satellite imagery revealing a series of ancient, large, sub-parallel subglacial bed channels preserved in the region between the Möller and Foundation Ice Streams, West Antarctica. We suggest that these newly recognized channels were formed by significant meltwater routed along the ice-sheet bed. The volume of water required is likely substantial and can most easily be explained by water generated at the ice surface. The Greenland Ice Sheet today exemplifies how significant seasonal surface melt can be transferred to the bed via englacial routing. For West Antarctica, the Pliocene (2.6–5.3 Ma) represents the most recent sustained period when temperatures could have been high enough to generate surface melt comparable to that of present-day Greenland. We propose, therefore, that a temperate ice sheet covered this location during Pliocene warm periods.

    Downscaling Gridded DEMs Using the Hopfield Neural Network

    A new Hopfield neural network (HNN) model for downscaling a digital elevation model in grid form (gridded DEM) is proposed. The HNN downscaling model works by minimizing the local semivariance as a goal and by matching the original coarse-spatial-resolution elevation value as a constraint. The HNN model is defined such that each pixel of the original coarse DEM is divided into f × f subpixels, represented as network neurons. The elevation of each subpixel is then derived iteratively (i.e., optimized) by minimizing the local semivariance under the coarse-elevation constraint. The proposed HNN model was tested against three commonly applied benchmark methods (bilinear, bicubic, and kriging resampling) in an experiment using both degraded and sampled datasets at 20-, 60-, and 90-m spatial resolutions. For this task, a simple linear activation function was used in the HNN model. The proposed model was evaluated comprehensively with visual and quantitative assessments against the benchmarks. Visual assessment was based on direct comparison of the same topographic features in different downscaled images, scatterplots, and DEM profiles. Quantitative assessment was based on commonly used parameters for DEM accuracy assessment, such as the root mean square error, the linear regression parameters m and b, and the correlation coefficient R. Both visual and quantitative assessments revealed the much greater accuracy of the HNN model for increasing the grid density of gridded DEMs.
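
    A simplified sketch of the two terms described above, assuming the goal term is approximated by moving each subpixel towards the mean of its four neighbours and the constraint term by correcting each coarse block so that its subpixel mean matches the coarse elevation; this is not the exact HNN energy formulation or activation scheme used in the paper.

```python
import numpy as np

def downscale_dem(coarse: np.ndarray, f: int, iterations: int = 200,
                  weight: float = 1.0) -> np.ndarray:
    """Iteratively estimate a fine DEM whose f x f block means reproduce the
    coarse DEM (constraint) while neighbouring subpixels stay similar (goal)."""
    fine = np.kron(coarse, np.ones((f, f)))          # initial guess
    for _ in range(iterations):
        # Goal term: move each subpixel towards the mean of its 4 neighbours,
        # which reduces the local semivariance.
        padded = np.pad(fine, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        fine += 0.5 * (neighbours - fine)
        # Constraint term: correct each coarse block so its subpixel mean
        # matches the original coarse elevation.
        block_means = fine.reshape(coarse.shape[0], f,
                                   coarse.shape[1], f).mean(axis=(1, 3))
        fine += weight * np.kron(coarse - block_means, np.ones((f, f)))
    return fine

coarse = np.array([[100.0, 120.0],
                   [110.0, 150.0]])
fine = downscale_dem(coarse, f=3)
print(fine.round(1))
print(fine.reshape(2, 3, 2, 3).mean(axis=(1, 3)))   # reproduces the coarse DEM
```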

    Connection between the Accretion Disk and Jet in the Radio Galaxy 3C 111

    We present the results of extensive multi-frequency monitoring of the radio galaxy 3C 111 between 2004 and 2010 at X-ray (2.4–10 keV), optical (R band), and radio (14.5, 37, and 230 GHz) wave bands, as well as multi-epoch imaging with the Very Long Baseline Array (VLBA) at 43 GHz. Over the six years of observation, significant dips in the X-ray light curve are followed by ejections of bright superluminal knots in the VLBA images. This shows a clear connection between the radiative state near the black hole, where the X-rays are produced, and events in the jet. The X-ray continuum flux and Fe line intensity are strongly correlated, with a time lag shorter than 90 days and consistent with zero. This implies that the Fe line is generated within 90 light-days of the source of the X-ray continuum. The power spectral density function of the X-ray variations contains a break, with a steeper slope at shorter timescales. The break timescale of 13 (+12, -6) days is commensurate with scaling according to the mass of the central black hole, based on observations of Seyfert galaxies and black hole X-ray binaries (BHXRBs). The data are consistent with the standard paradigm, in which the X-rays are predominantly produced by inverse Compton scattering of thermal optical/UV seed photons from the accretion disk by a distribution of hot electrons (the corona) situated near the disk. Most of the optical emission is generated in the accretion disk through reprocessing of the X-ray emission. The relationships that we have uncovered between the accretion disk and the jet in 3C 111, as well as in the FR I radio galaxy 3C 120 in a previous paper, support the paradigm that active galactic nuclei and Galactic BHXRBs are fundamentally similar, with characteristic time and size scales proportional to the mass of the central black hole. Comment: Accepted for publication in ApJ. 18 pages, 17 figures, 11 tables (full machine-readable data tables online on the ApJ website).
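
    For context, a correlation with an unknown lag between two unevenly sampled light curves is often measured with a discrete correlation function in the style of Edelson and Krolik; the sketch below applies it to synthetic series with an artificial ~50-day lag and is not the analysis performed in the paper.

```python
import numpy as np

def discrete_correlation(t_a, a, t_b, b, lag_bins):
    """Edelson & Krolik style discrete correlation function for two
    unevenly sampled light curves; returns the DCF in each lag bin."""
    a_res = (a - a.mean()) / a.std()
    b_res = (b - b.mean()) / b.std()
    udcf = np.outer(a_res, b_res)                 # pairwise products
    dt = t_b[None, :] - t_a[:, None]              # pairwise lags (days)
    dcf = []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        mask = (dt >= lo) & (dt < hi)
        dcf.append(udcf[mask].mean() if mask.any() else np.nan)
    return np.array(dcf)

# Synthetic example: the "Fe line" series lags the "continuum" by ~50 days.
rng = np.random.default_rng(0)
t_x = np.sort(rng.uniform(0, 2000, 300))
continuum = np.sin(t_x / 90.0) + 0.1 * rng.normal(size=t_x.size)
t_fe = np.sort(rng.uniform(0, 2000, 250))
fe_line = np.sin((t_fe - 50.0) / 90.0) + 0.1 * rng.normal(size=t_fe.size)

bins = np.arange(-180, 181, 20)
dcf = discrete_correlation(t_x, continuum, t_fe, fe_line, bins)
peak = int(np.nanargmax(dcf))
print("peak lag bin (days):", bins[peak], "to", bins[peak + 1])
```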